16 - Lecture_06_1_SVD

Hi, the topic of today's lecture is singular value decomposition.

We will see that this is a very good tool to judge how bad discrete, that is finite-dimensional, linear inverse problems really are.

This is necessary because we saw last time that the discrete deconvolution problem is not strictly ill-posed: it is a linear inverse problem where the matrix acting as the forward map is invertible, so all three of Hadamard's conditions are satisfied, but in practice the inversion still didn't quite work.

So there's a slight mismatch between theory and practice here, but this is only due to

the fact that we're not looking at the full infinite dimensional inverse problem, but

at a concrete finite dimensional discretization.

And this finite dimensionality hides the underlying ill-posedness of the infinite dimensional problem.

So we're kind of cheating by looking only at finitely many dimensions.

We're making the problem well-posed, but only by name because as we saw, practical inversion

still doesn't work.

So we need something in order to judge whether a given forward map in finite dimensions is

a problematic forward operator in the sense that it probably doesn't lend itself to a

naive inversion.

So in this section, we consider inverse problems of the type f = A u + epsilon.

Now I'm not putting those bars here anymore because in this section we only have finite-dimensional inverse problems where all quantities involved are vectors or matrices.

So f is a vector in R^m, A is a matrix with m times n entries, u is a vector in R^n, so A u is supposed to be in R^m, and epsilon is again a vector in R^m.

So everything is finite dimensional, and we're even allowing non-square forward mappings here.

So these are linear and finite-dimensional problems, and we will try to understand them better and how they behave in practice.
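As a tiny sketch of this setup, with made-up numbers, just to fix the dimensions:

```python
import numpy as np

m, n = 80, 120                         # data dimension m, unknown dimension n (non-square)
rng = np.random.default_rng(1)

A = rng.standard_normal((m, n))        # forward map A, an m-by-n matrix
u = rng.standard_normal(n)             # unknown u in R^n
eps = 1e-3 * rng.standard_normal(m)    # noise epsilon in R^m
f = A @ u + eps                        # data f in R^m

print(f.shape)                         # (80,)
```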

Before we start with the singular value decomposition, we recall diagonalization of matrices.

A square matrix A in R^(n x n), so it has to be a square matrix, n times n, not m times n or something else, is called diagonalizable if there exist a matrix S = (v_1 | ... | v_n), which is again an R^(n x n) matrix written in terms of its column vectors, so we put n vectors of size n next to each other and call that a matrix, and a diagonal matrix D with diagonal entries lambda_1 up to lambda_n, such that A can be written as the matrix product S times D times S inverse; in particular, S has to be an invertible matrix.

And then the lambda_i and v_i are eigenvalues and eigenvectors of A.
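As a quick numerical illustration, my own sketch rather than part of the lecture, we can compute such an S and D with NumPy and check that A = S D S^(-1); the matrix is chosen symmetric only so that diagonalizability is guaranteed.

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((5, 5))
A = B + B.T                          # symmetric, hence diagonalizable

lam, S = np.linalg.eig(A)            # eigenvalues lambda_1..lambda_n, eigenvectors as columns of S
D = np.diag(lam)

print(np.allclose(A, S @ D @ np.linalg.inv(S)))   # True: A = S D S^(-1)
```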

So why is this the case?

First we consider S inverse applied to a vector v_i, so what is that?

Well, first, S inverse times S: what is that? That's the identity matrix I_n, of size n times n. On the other hand, this is S inverse times the matrix consisting of the v_i as its columns, and by matrix multiplication we can also write this as the matrix with columns S inverse v_1 up to S inverse v_n.

And what is I_n? It is the matrix whose columns are (1, 0, ..., 0), (0, 1, 0, ..., 0), and so on, the last one being (0, ..., 0, 1). We call the i-th of these columns e_i, so e_i is the vector with a 1 at the i-th position and 0 at all other positions.

Comparing columns, this means that S inverse times v_i is e_i, which is exactly column number i of the identity.

Okay, so that was quite easy. Then that means that A v_i = S D S^(-1) v_i, and by what we just saw this is S times D times e_i. Now D times e_i: D is a diagonal matrix, so this is lambda_i times e_i, because D has the lambda_i in its diagonal positions. So we get S times lambda_i times e_i, which is lambda_i times S e_i, and S e_i is exactly the i-th column of S, that is v_i. So A v_i = lambda_i v_i, and lambda_i and v_i are indeed an eigenvalue and eigenvector of A.
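A short numerical check of this argument, again with a made-up symmetric example: for every column v_i of S we should have S^(-1) v_i = e_i and A v_i = lambda_i v_i.

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4))
A = B + B.T                                    # a diagonalizable (symmetric) example
lam, S = np.linalg.eig(A)
S_inv = np.linalg.inv(S)

for i in range(4):
    v_i = S[:, i]                              # i-th eigenvector, column i of S
    e_i = np.zeros(4)
    e_i[i] = 1.0
    assert np.allclose(S_inv @ v_i, e_i)       # S^(-1) v_i = e_i
    assert np.allclose(A @ v_i, lam[i] * v_i)  # A v_i = lambda_i v_i

print("all checks passed")
```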
